Recurrence-Relation-Based Reward Model for Performability Evaluation of Embedded Systems
Authors
Abstract
Many embedded systems behave as discrete-time semi-Markov processes (DTSMPs). For those systems, performability measures, especially when specified as an accumulated reward, are often difficult to evaluate analytically. In this article, we informally describe an approach that uses a recurrence-relation-based (RRB) reward model for performability evaluation of systems exhibiting DTSMP behavior. We explain how an RRB reward model is constructed and how such a model can be solved analytically.

I. MOTIVATION

Performability evaluation of embedded systems is important because 1) their performance is often gracefully degradable, and 2) a value or timing failure may result in severe consequences if such a system hosts a mission-critical application. Moreover, interval-of-time accumulated reward measures are well suited to performability modeling of such systems, because 1) embedded systems typically execute in open or closed loops or cycles,¹ each of which corresponds to a unit of time and accommodates at most one state transition, and 2) it is meaningful to quantify a mission's worth in terms of accomplished duty cycles.

However, performability modeling for those systems can be difficult. In particular, while embedded systems have relatively simple architectures and functionalities, the behavior of an embedded system is often non-Markovian in nature. For example, an application may require its host to take a particular action with a specified frequency or to be engaged in a specific operation through a pre-designated period of time, which implies a nonhomogeneous transition probability and a deterministic sojourn time, respectively. While those non-Markovian properties can be circumvented using the notion of an embedded Markov chain (since such a process behaves just like an ordinary Markov process at the instants of state transition), performability measures based on accumulated reward can still be difficult to solve analytically. In addition, embedded applications may involve path-dependent behavior, which prevents a reward model from being analytically manageable. Although simulation-based performability modeling tools are flexible and powerful, simulation is usually time-consuming and loses its advantages when a modeler wishes to gain insight from analytic results, such as a reachability graph or symbolic solutions, which are unlikely to be obtained from a simulation.

With the above observations, we develop an approach that uses a recurrence-relation-based reward model to represent a system's DTSMP behavior and to obtain performability-measure solutions. Previously, we leveraged recurrence relations for reliability assessment of a fault-tolerant bus architecture for an avionics system [1]. In addition, we built and solved a recurrence-relation-based reward model to evaluate performability in terms of expected accumulated reward for a distributed embedded system [2]. More recently, we have been further investigating the idea and attempting to generalize it to a certain degree.

The remainder of the article is organized as follows. Section II informally describes the basic elements of an RRB reward model and how it is constructed. Section III explains how such a model can be solved analytically for performability measures, based on an example. Section IV briefly presents an application to exemplify the applicability of RRB reward models to embedded systems. Section V concludes this extended abstract.

¹In the remainder of the text, the words "loop," "cycle," "frame," and "iteration" are used interchangeably.
II. RRB REWARD MODEL

As mentioned in Section I, a major class of embedded systems can be represented by DTSMPs. More specifically, such systems have the following characteristics, which traditional Markov reward models may not be able to handle:

C1) Path- and time-dependent transitions: A transition from Sj in the i-th cycle may depend on the path via which the system entered Sj (prior to cycle i), or on the elapsed time since the most recent traversal of a specific path.

C2) Deterministic or nondeterministic time-triggered transitions: A transition T is enabled in the i-th cycle only if that cycle is pre-designated as one during which T is allowed to fire. If T fires with probability 1 in a pre-designated cycle or at a pre-designated frequency, the transition is a deterministic time-triggered transition; otherwise it is a nondeterministic time-triggered transition.

C3) Deterministic sojourn times: The duration (quantified in cycles) for which the system in question remains in a particular state Sk is a constant.

Accordingly, a reward model for the type of semi-Markov process we are concerned with should be able to 1) allow an impulse reward to be accrued at the end of each cycle (which is also the epoch of a new cycle), and 2) support interval-of-time performability measures, such as expected accumulated reward, and instant-of-time measures, such as the probability that the system will be in Sk at the end of cycle i.

In order to construct recurrence-relation-based reward models, we use a state diagram to specify a DTSMP and introduce the following basic elements to represent the system characteristics described above: 1) indexed transition probability, 2) state-entry probability, 3) colored arc, and 4) indicator variable. The role of each basic element is described in the following subsections.

A. Indexed Transition Probabilities

An indexed state transition probability is expressed as γ(j,k)[i], where i refers to the i-th execution cycle, j and k are state identification numbers, and (j, k) denotes a transition from state Sj to state Sk (occurring in cycle i). By labeling arcs with γ(j,k)[i], we are able to 1) draw a state diagram such as the one shown in Figure 1, and 2) derive path- and/or elapsed-time-dependent transition probabilities. Furthermore, coupled with Pk[i], the probability that the system in question will be in Sk at the end of cycle i, γ(j,k)[i] allows us to derive recurrence relations. In turn, the recurrence relations enable us to obtain an analytic reward-model solution, as explained in Section III.

B. State-Entry Probability

A state-entry probability, denoted P̂(j,k)[i], is the likelihood that the system will enter state Sk from state Sj, k ≠ j, in cycle i. This probability enables us to simplify the construction and solution of a DTSMP model in which one or more states are characterized by deterministic sojourn times. In particular, from a model-construction perspective, P̂(j,k)[i] is important because it helps leverage the embedded Markov process in an RRB model (recall that each cycle accommodates one and only one transition). From a model-solution perspective, P̂(j,k)[i] simplifies the derivation of recurrence relations and the computation of accumulated reward for such a DTSMP, as shown in Sections III-B and III-C.
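To make the role of these elements concrete, the following Python sketch iterates the kind of recurrence that γ(j,k)[i] and Pk[i] give rise to, for a hypothetical three-state system (healthy, degraded, failed). The state space, the reward values, the particular γ(j,k)[i] used here (including a deterministic time-triggered transition that fires every fifth cycle), the helper names gamma and solve, and the specific forms Pk[i] = Σj γ(j,k)[i]·Pj[i−1], P̂(j,k)[i] = γ(j,k)[i]·Pj[i−1] (k ≠ j), and E[Y(n)] = Σi Σk rk·Pk[i] are illustrative assumptions made for this sketch, not the models or the analytic solutions of [1], [2] or Section III.

# Minimal numerical sketch of an RRB reward model for a hypothetical
# three-state system (0 = healthy, 1 = degraded, 2 = failed).
# The state space, rewards, and transition probabilities below are
# illustrative assumptions, not a model taken from the paper.

def gamma(j, k, i):
    """Indexed transition probability gamma(j,k)[i] for execution cycle i.

    State 1 -> 2 is modeled as a deterministic time-triggered transition:
    it is enabled (and then fires with probability 1) only every 5th cycle.
    """
    if j == 0:                                   # healthy
        return {0: 0.98, 1: 0.02, 2: 0.0}[k]
    if j == 1:                                   # degraded
        if i % 5 == 0:                           # pre-designated cycles only
            return {0: 0.0, 1: 0.0, 2: 1.0}[k]
        return {0: 0.0, 1: 1.0, 2: 0.0}[k]
    return {0: 0.0, 1: 0.0, 2: 1.0}[k]           # failed state is absorbing

def solve(n_cycles, reward=(1.0, 0.5, 0.0), n_states=3):
    """Iterate the recurrences for Pk[i] and the expected accumulated reward."""
    P = [1.0, 0.0, 0.0]                          # system starts in state 0
    accumulated = 0.0
    for i in range(1, n_cycles + 1):
        # state-entry probabilities P_hat(j,k)[i] = gamma(j,k)[i] * Pj[i-1], k != j
        entry = {(j, k): gamma(j, k, i) * P[j]
                 for j in range(n_states) for k in range(n_states) if j != k}
        # Pk[i] = probability of remaining in Sk plus probability of entering Sk
        P = [gamma(k, k, i) * P[k]
             + sum(entry[(j, k)] for j in range(n_states) if j != k)
             for k in range(n_states)]
        # impulse reward r_k accrued at the end of cycle i
        accumulated += sum(reward[k] * P[k] for k in range(n_states))
    return P, accumulated

if __name__ == "__main__":
    P, Y = solve(20)
    print("state probabilities at the end of cycle 20:", [round(p, 4) for p in P])
    print("expected accumulated reward over 20 cycles:", round(Y, 4))

The single update per cycle mirrors the convention that each cycle accommodates at most one state transition; obtaining symbolic, rather than purely numerical, solutions of such recurrences is the subject of Section III.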